
    A consistent and fault-tolerant data store for software defined networks

    Master's thesis in Segurança Informática (Information Security), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013.

    The success of the Internet is indisputable. Nevertheless, serious criticisms of its architecture have been raised for a long time. Researchers believe that the main problem with this architecture lies in the fact that network devices incorporate distinct and complex functions that go beyond the packet-forwarding purpose for which they were created [1]. The best example of this is the (complex) distributed routing protocols that routers execute in order to guarantee packet forwarding. Among the consequences is the complexity of traditional networks, both in terms of innovation and of maintenance. The result is networks that are expensive and poorly resilient. To address this problem, a different network architecture has been adopted by both the research community and industry. In these new networks, known as Software Defined Networks (SDN), the control plane is physically separated from the data plane: all network control logic and state is removed from the network devices and executed in a logically centralized controller which, with a global, logical and consistent view of the network, can control it dynamically. With this delegation of functions to the controller, network devices can dedicate themselves exclusively to their essential task of forwarding data packets. The devices therefore remain simple and cheaper, while the controller can implement simplified (and possibly more effective) control functions thanks to its global view of the network. A logically centralized programming model does not, however, imply a centralized system. In fact, the need to guarantee adequate levels of performance, scalability and resilience prevents the control plane from being centralized; instead, production SDN networks use distributed control planes, and the architects of these systems face the fundamental trade-offs associated with distributed systems, namely the appropriate balance between consistency and availability.

    In this work we propose an architecture for a distributed, fault-tolerant and consistent controller. The central element of this architecture is a replicated, fault-tolerant database that keeps the network state consistent, guaranteeing that the network control applications residing in the controller operate on a consistent view of the network, which ensures coordination and consequently simplifies application development. The drawback of this approach is a decrease in performance, which limits the controller's responsiveness and scalability. Even so, an important conclusion of our study is that it is possible to achieve the proposed goals (i.e., strong consistency and fault tolerance) while keeping performance at a level acceptable for certain types of network. Regarding fault tolerance, in an SDN architecture faults can occur in three different domains: the data plane (failures of network equipment), the control plane (failures of the connection between the controller and the network equipment) and, finally, the controller itself.
    The latter is of particular importance, since its failure can disrupt the entire network (i.e., leaving no connectivity between hosts). It is therefore essential that production SDN networks include mechanisms that can cope with the various types of fault and guarantee availability close to 100%.

    Recent work on SDN has explored the question of consistency at different levels. Programming languages such as Frenetic [2] offer consistency in the composition of network policies, automatically resolving inconsistencies in forwarding rules. Another related line of work proposes abstractions that guarantee network consistency while the equipment's forwarding tables are being updated. The goal of both is to guarantee consistency after the forwarding policy has been decided. Onix (a widely cited SDN controller [3]) guarantees a different kind of consistency: one that matters before the forwarding policy is decided. That controller offers two consistency levels for storing network state: eventual consistency and strong consistency. Our work uses only strong consistency, and shows that it can be provided with performance superior to that of Onix. Current distributed SDN controllers (Onix and HyperFlow [4]) use non-transparent distribution models with weak properties, such as eventual consistency, which demand extra care in the development of network control applications on the controller. This stems from the (in our view unfounded) idea that properties such as strong consistency significantly limit controller scalability. A strongly consistent controller, however, yields a simpler programming model that is transparent with respect to the controller's distribution.

    In this work we argue that well-known replication techniques based on state machine replication [5] can be used to build an SDN controller that not only guarantees fault tolerance and strong consistency but does so with acceptable performance. The main contribution of this dissertation is thus to show that a database built with such techniques (as provided by BFT-SMaRt [6]) and integrated with an existing open-source controller (Floodlight) can efficiently handle various kinds of workload generated by network control applications. The main contributions of our work can be summarized as follows: 1. the proposal of an architecture for a distributed controller based on the properties of strong consistency and fault tolerance; 2. since the proposed architecture is based on a replicated database, a study of the workload that three control applications impose on that database; 3. to assess the feasibility of our architecture, an analysis of the replication middleware's capacity to process the workload mentioned in the previous point. This study determines (a) how many events per second the middleware can process and (b) the latency required to process such events, for each of the applications considered and for each of the possible network events processed by those applications. These two variables are important for understanding the scalability and performance of the proposed architecture.
    Our work, namely the study of application workloads (on a first version of our database integration) and of the middleware's capacity, resulted in a publication: Fábio Botelho, Fernando Ramos, Diego Kreutz and Alysson Bessani; On the feasibility of a consistent and fault-tolerant data store for SDNs, in Second European Workshop on Software Defined Networks, Berlin, October 2013. This dissertation was submitted about five months after that paper and therefore contains a considerably more refined and thorough study.

    Although traditional data networks are very successful, they exhibit considerable complexity, manifested in the configuration of network devices and in the development of network protocols. Researchers argue that this complexity derives from the fact that network devices are responsible both for processing control functions, such as distributed routing protocols, and for forwarding packets. This work is motivated by the emergent network architecture of Software Defined Networks, where the control functionality is removed from the network devices and delegated to a server (usually called the controller) that is responsible for dynamically configuring the network devices present in the infrastructure. The controller has the advantage of logically centralizing the network state, in contrast to the previous model where state was distributed across the network devices. Despite this logical centralization, the control plane (where the controller operates) must be distributed in order to avoid being a single point of failure. However, this distribution introduces several challenges due to the heterogeneous, asynchronous and faulty environment in which the controller operates. Current distributed controllers lack transparency due to the eventual consistency properties employed in the distribution of the controller, which results in a complex programming model for the development of network control applications. This work proposes a fault-tolerant distributed controller with strong consistency properties that allows a transparent distribution of the control plane. The drawback of this approach is the increase in overhead and delay, which limits responsiveness and scalability. However, despite being fault-tolerant and strongly consistent, we show that this controller is able to provide performance results (in some cases) superior to those available in the literature.
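
    The state-machine-replication idea at the core of this architecture can be illustrated with a short Python sketch. It is a hypothetical, simplified model (plain Python, not the thesis code and not the BFT-SMaRt or Floodlight APIs): every command is applied to all replicas in the same total order, so any replica's reply reflects a consistent view of the stored network state.

        # Minimal sketch of a strongly consistent, replicated key-value store in the
        # spirit of state machine replication. Hypothetical illustration only: the
        # real system relies on BFT-SMaRt for total-order delivery and fault tolerance.

        class Replica:
            """Deterministic state machine: a key-value map of network state."""
            def __init__(self):
                self.store = {}

            def apply(self, command):
                op, key, value = command
                if op == "PUT":
                    self.store[key] = value
                    return "OK"
                if op == "GET":
                    return self.store.get(key)
                raise ValueError(f"unknown operation {op!r}")

        class ReplicatedDataStore:
            """Delivers every command to all replicas in the same total order,
            so any single replica's reply reflects a consistent global view."""
            def __init__(self, n_replicas=4):   # 4 replicas tolerate 1 Byzantine fault (3f+1)
                self.replicas = [Replica() for _ in range(n_replicas)]

            def execute(self, command):
                results = [r.apply(command) for r in self.replicas]  # same order everywhere
                return results[0]                                    # all replicas agree

        # A control application (e.g., a learning switch) storing network state:
        ds = ReplicatedDataStore()
        ds.execute(("PUT", "switch:1:mac:00aa", "port 3"))
        print(ds.execute(("GET", "switch:1:mac:00aa")))  # -> 'port 3'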

    Predicting multiple sclerosis disease severity with multimodal deep neural networks

    Multiple Sclerosis (MS) is a chronic disease of the human brain and spinal cord which can cause permanent damage or deterioration of the nerves. The severity of MS is monitored by the Expanded Disability Status Scale (EDSS), composed of several functional sub-scores. Early and accurate classification of MS disease severity is critical for slowing down or preventing disease progression by applying early therapeutic intervention strategies. Recent advances in deep learning and the wide use of Electronic Health Records (EHR) create opportunities to apply data-driven predictive modeling tools to this goal. Previous studies using single-modal machine learning and deep learning algorithms were limited in prediction accuracy due to data insufficiency or model simplicity. In this paper, we propose using patients' multimodal longitudinal EHR data to predict multiple sclerosis disease severity at the hospital visit. This work makes two important contributions. First, we describe a pilot effort to leverage structured EHR data, neuroimaging data and clinical notes to build a multimodal deep learning framework to predict a patient's MS disease severity. The proposed pipeline demonstrates up to a 25% increase in the Area Under the Receiver Operating Characteristic curve (AUROC) compared to models using single-modal data. Second, the study provides insights into the amount of useful signal embedded in each data modality with respect to MS disease prediction, which may improve data collection processes.
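
    A late-fusion multimodal network of the kind described above can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the per-modality encoders, feature dimensions and class count are assumptions, not the architecture used in the paper.

        # Hypothetical sketch of a late-fusion multimodal classifier, assuming
        # pre-extracted feature vectors for structured EHR data, neuroimaging,
        # and clinical notes; dimensions and layer sizes are illustrative only.
        import torch
        import torch.nn as nn

        class MultimodalSeverityClassifier(nn.Module):
            def __init__(self, ehr_dim=64, img_dim=256, notes_dim=768, n_classes=2):
                super().__init__()
                self.ehr_encoder = nn.Sequential(nn.Linear(ehr_dim, 32), nn.ReLU())
                self.img_encoder = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU())
                self.notes_encoder = nn.Sequential(nn.Linear(notes_dim, 64), nn.ReLU())
                self.head = nn.Sequential(
                    nn.Linear(32 + 64 + 64, 64), nn.ReLU(), nn.Linear(64, n_classes)
                )

            def forward(self, ehr, img, notes):
                # concatenate modality embeddings, then classify severity
                fused = torch.cat(
                    [self.ehr_encoder(ehr), self.img_encoder(img), self.notes_encoder(notes)],
                    dim=-1,
                )
                return self.head(fused)  # logits for EDSS-derived severity classes

        model = MultimodalSeverityClassifier()
        logits = model(torch.randn(8, 64), torch.randn(8, 256), torch.randn(8, 768))
        print(logits.shape)  # torch.Size([8, 2])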

    The real-time molecular characterisation of human brain tumours during surgery using Rapid Evaporative Ionization Mass Spectrometry [REIMS] and Raman spectroscopy: a platform for precision medicine in neurosurgery

    Aim: To investigate new methods for the chemical detection of tumour tissue during neurosurgery. Rationale: Surgeons operating on brain tumours currently lack the ability to directly and immediately assess the presence of tumour tissue to help guide resection. Through developing a first-in-human application of new technology, we hope to demonstrate proof of concept that chemical detection of tumour tissue is possible. It will further be demonstrated that information can be obtained to potentially aid treatment decisions. This new technology could therefore become a platform for more effective surgery and for introducing precision medicine to neurosurgery. Methods: Molecular analysis was performed using Raman spectroscopy and Rapid Evaporative Ionization Mass Spectrometry (REIMS). These systems were first developed for use in brain surgery. A single-centre prospective observational study of both modalities was designed, involving a total of 75 patients undergoing craniotomy and resection of a range of brain tumours. A neuronavigation system was used to register spectral readings in 3D space. Precise intraoperative readings from different tumour zones were taken and compared to matched core biopsy samples verified by routine histopathology. Results: Multivariate statistics, including PCA/LDA analysis, were used to analyse the spectra obtained and compare them to the histological data. The systems identified normal versus tumour tissue, tumour grade, tumour type, tumour density and the tissue status of key markers of gliomagenesis. Conclusions: The work in this thesis provides proof of concept that useful real-time intraoperative spectroscopy is possible. It can integrate well with the current operating-room setup to provide key information which could potentially enhance surgical safety and effectiveness in increasing the extent of resection. The ability to group tissue samples with respect to genomic data opens up the possibility of using this information during surgery to speed up treatment and to escalate or de-escalate surgery in specific phenotypic groups, introducing precision medicine to neurosurgery.
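
    The PCA/LDA step of the spectral analysis can be reproduced in outline with scikit-learn. The snippet below is a hedged sketch using synthetic data in place of REIMS/Raman spectra; the number of spectral bins, principal components and cross-validation folds are arbitrary choices, not the study's settings.

        # Illustrative sketch of PCA followed by LDA for labelled spectra
        # (tumour vs. normal); synthetic data stands in for real spectra.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(75, 2000))        # 75 specimens x 2000 spectral bins
        y = rng.integers(0, 2, size=75)        # histology labels: 0 = normal, 1 = tumour

        pipeline = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
        scores = cross_val_score(pipeline, X, y, cv=5)   # cross-validated accuracy
        print(f"mean CV accuracy: {scores.mean():.2f}")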

    A Comparative Analysis of Transmission Control Protocol Improvement Techniques over Space-Based Transmission Media

    The purpose of this study was to assess the throughput improvement afforded by various TCP optimization techniques, with respect to a simulated geosynchronous satellite system, in order to provide a cost justification for the implementation of a given enhancement technique. The research questions were answered through modeling and simulation of a satellite transmission system via a Linux-based network topology; the simulation results were analyzed primarily via a non-parametric method to ascertain performance differences between the various TCP optimization techniques. It was determined that each technique studied, which included the Space Communication Protocol Standard-Transport Protocol (SCPS-TP), window scale, selective acknowledgements (SACKs), and the combined use of the window scale and SACK mechanisms, provided varying levels of improvement compared to a standard TCP implementation. In terms of throughput, SCPS-TP provided the greatest overall improvement, with the window scale and window scale/SACK techniques providing significant benefits at low bit error rates (BER). The SACK modification improved throughput at high BER levels but performed comparably to standard TCP in scenarios with lower BER levels. These findings will assist communications planners in deciding whether to implement a given enhancement and which technique to utilize.
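
    The abstract names only "a non-parametric method"; one common choice for comparing several independent groups is the Kruskal-Wallis H-test, sketched below in Python with made-up throughput samples. Both the specific test and the numbers are assumptions for illustration, not the study's actual analysis or data.

        # Hedged sketch of a non-parametric comparison of TCP techniques:
        # throughput samples per technique are compared with a Kruskal-Wallis H-test.
        from scipy import stats

        throughput_kbps = {                     # hypothetical samples per technique
            "standard TCP":        [310, 305, 298, 321, 307],
            "window scale":        [780, 765, 790, 802, 771],
            "SACK":                [330, 342, 336, 329, 348],
            "window scale + SACK": [815, 790, 808, 822, 799],
            "SCPS-TP":             [940, 955, 948, 962, 930],
        }

        h_stat, p_value = stats.kruskal(*throughput_kbps.values())
        print(f"H = {h_stat:.2f}, p = {p_value:.4f}")   # small p -> techniques differ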

    Exploiting behavioral biometrics for user security enhancements

    As online business has become very popular in the past decade, the tasks of providing user authentication and verification have become more important than before to protect sensitive user information from malicious hands. The most common approach to user authentication and verification is the use of passwords. However, the dilemma users face with traditional passwords is increasingly evident: users tend to choose easy-to-remember passwords, which are often weak and easy to crack. Meanwhile, behavioral biometrics have promising potential to meet both security and usability demands, since they authenticate users by "who you are" instead of "what you have". In this dissertation, we first develop two user verification applications based on behavioral biometrics: the first via mouse movements, and the second via tapping behaviors on smartphones; we then focus on modeling user web browsing behaviors with Fitts' law.

    Specifically, we develop a user verification system by exploiting the uniqueness of people's mouse movements. The key feature of our system lies in using much more fine-grained (point-by-point) angle-based metrics of mouse movements for user verification. These new metrics are relatively unique from person to person and independent of the computing platform. We conduct a series of experiments to show that the proposed system can verify a user in an accurate and timely manner with minor system overhead. Similar to mouse movements, the tapping behaviors of smartphone users on touchscreens also vary from person to person. We propose a non-intrusive user verification mechanism to substantiate whether an authenticating user is the true owner of the smartphone or an impostor who happens to know the passcode. The effectiveness of the proposed approach is validated through real experiments. To further understand user pointing behaviors, we stress-test Fitts' law "in the wild", namely under natural web browsing environments, instead of the restricted laboratory settings of previous studies. Our analysis shows that, while averaged pointing times follow Fitts' law very well, there are considerable deviations from it. We observe that, in natural browsing, a fast movement has a different error model from the other two movement types. Therefore, a complete profiling of user pointing performance should be done in more detail, for example, by constructing different error models for slow and fast movements. As future work, we plan to exploit multiple-finger tapping for smartphone user verification and to evaluate user privacy issues in Amazon wish lists.
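
    Two of the building blocks mentioned above, point-by-point angle metrics of a mouse trajectory and the Fitts' law model MT = a + b * log2(D/W + 1), can be sketched briefly in Python. The trajectory coordinates and timing values below are hypothetical; only the formulas follow the abstract.

        # Illustrative sketch: (1) turning angles along a mouse trajectory,
        # (2) a least-squares fit of Fitts' law to pointing-time data.
        import numpy as np

        def turning_angles(points):
            """Angle (radians) between consecutive movement vectors of a trajectory."""
            pts = np.asarray(points, dtype=float)
            v = np.diff(pts, axis=0)                      # point-to-point vectors
            unit = v / np.linalg.norm(v, axis=1, keepdims=True)
            cosines = np.clip(np.sum(unit[:-1] * unit[1:], axis=1), -1.0, 1.0)
            return np.arccos(cosines)

        trajectory = [(0, 0), (5, 1), (11, 3), (18, 4), (26, 4)]
        print(turning_angles(trajectory))                 # per-user angle "signature"

        # Fitts' law: index of difficulty ID = log2(D/W + 1), MT = a + b * ID
        D = np.array([100, 200, 400, 800])                # target distances (px)
        W = np.array([20, 20, 40, 40])                    # target widths (px)
        MT = np.array([0.42, 0.55, 0.61, 0.74])           # observed pointing times (s)
        ID = np.log2(D / W + 1)
        b, a = np.polyfit(ID, MT, 1)                      # slope b, intercept a
        print(f"MT ~ {a:.2f} + {b:.2f} * ID")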

    Developing silicon pixel detectors for LHCb: constructing the VELO Upgrade and developing a MAPS-based tracking detector

    The Large Hadron Collider beauty (LHCb) experiment is currently undergoing a major upgrade of its detector, including the construction of a new silicon pixel detector, the Vertex Locator (VELO) Upgrade. The challenges faced by the LHCb VELO Upgrade are discussed, and the design to overcome them is presented. VELO modules have been produced at the University of Manchester. The VELO modules use 55 μm pixels operating 5.1 mm from the beam without a beam pipe, an innovative silicon microchannel cooling substrate, and 40 MHz readout with a full detector bandwidth of 3 Tb/s. The module assembly process and the results of the associated R&D are presented. The mechanical and electronic tests are described. A grading scheme for each test is described, and the results are presented. The majority of the modules are of excellent quality, with 40 out of 43 of suitable quality for installation in the experiment. A full set of modules for the experiment has now been produced. The VELO Upgrade is read out into a data acquisition system based on an FPGA board. The architecture of the readout firmware for the readout FPGA for the VELO Upgrade is presented, and the function of each block described. Challenges arise due to the design of the VeloPix front-end chip, the fully software trigger and the real-time analysis paradigm. These challenges are discussed and their solutions briefly described. An algorithm for identifying isolated clusters is presented and previously considered approaches discussed. The current design uses around 83% of the available logic blocks and 85% of the available memory blocks. A complete version of the firmware is now available and is being refined. An ultimate version of the LHCb experiment, the LHCb Upgrade II, is being designed for the 2030s to fully exploit the potential of the high luminosity LHC. The Mighty Tracker is the proposed new combined-technology downstream tracker for Upgrade II, consisting of a silicon pixel inner region and a scintillating fibre outer region. A potential layout of the detector and modules is given. The silicon pixels will likely be the first LHC tracker based on radiation-hard HV-MAPS technology. Studies for the electronic readout system of the silicon inner region are reported. The total bandwidth and its distribution across the tracker are discussed. The numbers of key readout and FPGA DAQ boards are calculated. The detector's expected data rate is 8.13 Tb/s in Upgrade II conditions over a total of more than 46,000 front-end chips.
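
    The isolated-cluster idea can be illustrated in software, even though the actual algorithm runs in FPGA firmware on VeloPix data. The Python sketch below only conveys the concept (a hit with no active neighbour in its 3x3 surrounding forms a complete single-pixel cluster); the real firmware's data layout and logic differ.

        # Hedged software sketch of isolated-cluster flagging. Illustration only;
        # not the VELO Upgrade firmware implementation.
        def isolated_hits(hits):
            """Return the subset of (col, row) hits with no neighbouring hit."""
            hit_set = set(hits)
            isolated = []
            for col, row in hits:
                has_neighbour = any(
                    (col + dc, row + dr) in hit_set
                    for dc in (-1, 0, 1) for dr in (-1, 0, 1)
                    if not (dc == 0 and dr == 0)
                )
                if not has_neighbour:
                    isolated.append((col, row))
            return isolated

        hits = [(10, 10), (10, 11), (42, 7), (100, 55)]
        print(isolated_hits(hits))   # -> [(42, 7), (100, 55)]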

    DEVELOPMENT OF A DECISION SUPPORT SYSTEM FOR CAPACITY PLANNING FROM GRAIN HARVEST TO STORAGE

    This dissertation investigated issues surrounding grain harvest and transportation logistics. A discrete event simulation model of grain transportation from the field to an on-farm storage facility was developed to evaluate how truck and driver resource constraints impact material flow efficiency, resource utilization, and system throughput. Harvest rate and in-field transportation were represented as a stochastic entity generation process, and service times associated with various material handling steps were represented by a combination of deterministic times and statistical distributions. The model was applied to data collected for three distinct harvest scenarios (18 total days). The observed number of deliveries was within ±2 standard deviations of the simulation mean for 15 of the 18 input conditions examined, and on a daily basis, the median error between the simulated and observed deliveries was -4.1%.

    The model was expanded to simulate the whole harvest season and to include temporary wet storage capacity and grain drying. Moisture content changes due to field dry-down were modeled using weather data and grain equilibrium moisture content relationships, resulting in an RMSE of 0.73 points. Dryer capacity and performance were accounted for by adjusting the specified dryer performance to the observed level of moisture removal and drying temperature. Dryer capacity was generally underpredicted, and large variations were found in the observed data. The expanded model matched the observed cumulative mass of grain delivered well and estimated that the harvest would take one partial day longer than was observed. The usefulness of the model for evaluating both costs and system performance was demonstrated by conducting a sensitivity analysis and examining system changes for a hypothetical operation. A dry year and a slow-drying crop had the largest impact on the system's operating and drying costs (12.7% decrease and 10.8% increase, respectively). Reducing the drying temperature to maintain quality when drying white corn had no impact on the combined drying and operating cost, but harvest took six days longer; the reduced drying capacity at lower temperatures resulted in more field drying, which counteracted the reduced drying efficiency and increased field time. The sensitivity analysis demonstrated varied benefits of increased drying and transportation capacity based on how often these systems created a bottleneck in the operation. For some combinations of longer transportation times and higher harvest rates, increasing hauling and drying capacity could shorten the harvest window by a week or more at an increase in costs of less than $12 ha⁻¹.

    An additional field study was conducted to examine corn harvest losses in Kentucky. Total losses for cooperator combines were found to be between 0.8% and 2.4% of total yield (86 to 222 kg ha⁻¹). On average, the combine head accounted for 66% of the measured losses, and the total losses were highly variable, with coefficients of variation ranging from 21.7% to 77.2%. Yield and harvest losses were monitored in a single field as the grain dried from 33.9% to 14.6% moisture. There was no significant difference in the potential yield at any moisture level, and the observed yield and losses displayed little variation for moisture levels from 33.9% to 19.8%, with total losses of less than 1% (82 to 130 kg dry matter ha⁻¹). Large amounts of lodging occurred while the grain dried from 19.8% to 14.6%, which resulted in an 18.9% reduction in yield and harvest losses in excess of 9%. Allowing the grain to field dry generally improved test weight and reduced mechanical damage; however, there was a trend of increased mold and other damage with prolonged field drying.
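
    The truck-and-driver queueing structure described above has the classic shape of a discrete event simulation. The Python sketch below uses the SimPy library (an assumption; the dissertation does not state its tooling) with entirely hypothetical service times, purely to illustrate the loading, hauling, queueing at a shared unloading point and returning structure of such a model.

        # Minimal SimPy sketch of a field-to-storage hauling cycle. All rates
        # and times are hypothetical; not the dissertation's model or data.
        import random
        import simpy

        LOAD_T, TRAVEL_T, UNLOAD_T = 12.0, 20.0, 8.0   # mean minutes per step (assumed)

        def truck(env, name, pit, log):
            while True:
                yield env.timeout(random.expovariate(1.0 / LOAD_T))    # loading in field
                yield env.timeout(random.expovariate(1.0 / TRAVEL_T))  # haul to storage
                with pit.request() as req:                             # wait for the pit
                    yield req
                    yield env.timeout(random.expovariate(1.0 / UNLOAD_T))
                log.append((env.now, name))                            # delivery recorded
                yield env.timeout(random.expovariate(1.0 / TRAVEL_T))  # return empty

        random.seed(1)
        env = simpy.Environment()
        pit = simpy.Resource(env, capacity=1)        # single unloading point
        deliveries = []
        for i in range(3):                           # three trucks share one pit
            env.process(truck(env, f"truck-{i}", pit, deliveries))
        env.run(until=600)                           # simulate a 10-hour harvest day
        print(f"{len(deliveries)} loads delivered")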